In this example, we'll build a full-stack application that uses Retrieval Augmented Generation (RAG) to deliver accurate and contextually relevant responses in a chatbot.
RAG is a useful pattern that allows us to provide an LLM with context, such as data from Smart Search, so it can give more relevant answers to queries from the consumer.
We will demonstrate how to embed a custom chatbot application built with Next.js, deployed on WP Engine's Node Engine, that uses Smart Search to provide context for the given WordPress site.
Here is a shortlist of tasks to complete:
1. Enable WP Engine Smart Search on a WP Engine environment
2. Build the chatbot application in Next.js
3. Embed the chatbot in a WordPress site
Let's enable WP Engine Smart Search on a WP Engine environment. You can follow the Enable documentation to get WP Engine Smart Search added to your environment.
For this example I have created a site called smartsearchrag and have added a WP Engine Smart Search license to it.
Next navigate to WP Admin for the environment:
Then navigate to the AI-Powered Hybrid Search page, enable the feature, and select the post_content field in the fields section. We are going to use this field as our AI-Powered field for semantic searches.
Save the configuration and head on over to the Sync page then click Index now:
Let the indexing operation complete, then move on to the next step.
We will now build our chatbot application in Next.js. Here is what the final output should look like:
Create a new Next.js application by running the command below and answering the prompts as follows:
npx create-next-app next-chatbot-smart-search
Let's navigate to the newly created application directory:
cd next-chatbot-smart-search
This is what the barebones NextJS application structure should look like:
Let's run the application:
npm run dev
Then we can preview the base application in the browser at http://localhost:3000
In the next sections, we'll enhance our Next.js application by implementing a utility function for fetching context from WP Engine Smart Search using Semantic Search, creating an API endpoint for our chat UI, and designing the UI components to display the chat interface.
1. Utility Function for Semantic Search
First, we'll create a utility function to interact with WP Engine's Smart Search. This function will leverage semantic search capabilities to fetch relevant context.
2. API Endpoint for Chat UI
Next, we'll set up an API endpoint in Next.js to handle requests from our chat UI. This endpoint will use the utility function we just created to get context from WP Engine Smart Search.
3. UI Components for Chat Interface
Finally, let's build the UI components to display the chat interface. We'll create a chat input field and a message display area, handling user inputs and displaying the fetched context.
We are going to create a file that will be responsible for fetching data from Smart Search using the Semantic Search option in the API.
First, we'll create a file named src/utils/context.ts in the utils directory. This file will contain the logic for making the request to the WP Engine Smart Search API using Semantic Search.
// src/utils/context.ts

// The function `getContext` retrieves the context for a given message
export const getContext = async (
  message: string
): Promise<Record<string, any>> => {
  const url = process.env.SMART_SEARCH_URL ?? "";
  const token = process.env.SMART_SEARCH_ACCESS_TOKEN ?? "";

  const query = `
    {
      find(
        query: "${message}"
        semanticSearch: { searchBias: 10, fields: ["post_content"] }
      ) {
        total
        documents {
          id
          score
          data
        }
      }
    }
  `;

  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ query }),
  });

  const data = await response.json();
  return data;
};
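To illustrate what `getContext` resolves with, here is a hedged sketch of the response shape implied by the GraphQL query above. The field names come from the query's selection set; the interface names and mock values are invented for illustration:

```typescript
// Approximate shape of the Smart Search `find` response (assumed from the query above)
interface SmartSearchDocument {
  id: string;
  score: number;
  data: Record<string, unknown>;
}

interface SmartSearchResponse {
  data: {
    find: {
      total: number;
      documents: SmartSearchDocument[];
    };
  };
}

// A mocked response, showing how callers reach the documents array
const mock: SmartSearchResponse = {
  data: {
    find: {
      total: 1,
      documents: [
        { id: "post:1", score: 0.92, data: { post_title: "Example Show" } },
      ],
    },
  },
};

console.log(mock.data.find.documents[0].id); // "post:1"
```

The chat endpoint we build later reads `context.data.find.documents`, so any change to the query's selection set needs to be mirrored there.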
WP Engine Smart Search provides a GraphQL API for seamless interaction. This API allows you to index, search, and delete documents efficiently.
To make an API call, you need to provide a SMART_SEARCH_URL and a SMART_SEARCH_ACCESS_TOKEN, which can be found in the user portal.
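For local development, these can live in a .env.local file at the project root. Here is a sketch with placeholder values (substitute the URL and access token from your user portal; OPENAI_API_KEY is used later by the chat endpoint):

```shell
# .env.local — placeholder values; copy the real ones from the WP Engine user portal
SMART_SEARCH_URL="https://your-environment.example.com/graphql"
SMART_SEARCH_ACCESS_TOKEN="your-smart-search-access-token"
OPENAI_API_KEY="sk-your-openai-key"
```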
Next let’s create our chat endpoint for our chat UI to interact with. The goal of this endpoint is to receive the conversation from the UI, fetch context from WP Engine Smart Search, and stream the LLM response back to the client.
Create a new file at src/app/api/chat/route.ts
Next add the following npm modules:
npm install --save ai @ai-sdk/openai
Then import them in our route.ts file:
import { CoreMessage, streamText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { getContext } from "@/utils/context";

// IMPORTANT! Set the runtime to edge
export const runtime = "edge";
Next, let's configure the OpenAI client:
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
Next, let's flesh out the endpoint:
export async function POST(req: Request) {
  try {
    const { messages } = await req.json();

    // Get the last sent user message
    const lastMessage = messages[messages.length - 1];

    // Get the last 10 messages, not including the latest
    const previousMessages = messages.slice(-11, -1);

    // Get the context for the last user message.
    // This is the function that calls out to WP Engine Smart Search.
    const context = await getContext(lastMessage.content);

    // Map the context into a friendly text block for the prompt
    const messageContext = context.data.find.documents.map((doc: any) => {
      return `
ID: ${doc.id}
Title: ${doc.data.post_title}
Content: ${doc.data.post_content}
SearchScore: ${doc.score}
`;
    });

    // Construct our prompt
    const prompt: CoreMessage = {
      role: "assistant",
      content: `AI assistant is a brand new, powerful, human-like artificial intelligence.
The traits of AI include expert knowledge, helpfulness, cleverness, and articulateness.
AI is a well-behaved and well-mannered individual.
AI is always friendly, kind, and inspiring, and he is eager to provide vivid and thoughtful responses to the user.
AI has the sum of all knowledge in their brain, and is able to accurately answer nearly any question about any topic in conversation.
AI assistant is a big fan of WP Engine Smart Search.
AI assistant uses WP Engine Smart Search to provide the most accurate and relevant information to the user.
AI assistant data from WP Engine Smart Search is based on TV Shows.
START CONTEXT BLOCK
${messageContext.join("----------------\n\n")}
END OF CONTEXT BLOCK
START OF HISTORY BLOCK
${JSON.stringify(previousMessages)}
END OF HISTORY BLOCK
AI assistant will take into account any CONTEXT BLOCK that is provided in a conversation.
AI assistant will take into account any HISTORY BLOCK that is provided in a conversation.
If the context does not provide the answer to the question, the AI assistant will say, "I'm sorry, but I don't know the answer to that question".
AI assistant will not apologize for previous responses, but instead will indicate new information was gained.
AI assistant will not invent anything that is not drawn directly from the context.
AI assistant will answer coding questions.
`,
    };

    const response = await streamText({
      // Here we can specify which OpenAI model to use
      model: openai("gpt-4o"),
      messages: [
        prompt,
        ...messages.filter((message: CoreMessage) => message.role === "user"),
      ],
    });

    // Return a response stream to the client.
    // This is the magic that allows message streaming as soon
    // as the LLM has generated a chunk of the response.
    return response.toAIStreamResponse();
  } catch (e) {
    throw e;
  }
}
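The history windowing in the endpoint is easy to get wrong, so here is a small self-contained sketch of the same `slice(-11, -1)` logic. The `ChatMessage` type and `splitHistory` helper are illustrative, not part of the AI SDK:

```typescript
// Hedged sketch of the history-window logic used in the route above:
// keep the latest message separate and take up to 10 messages before it.
type ChatMessage = { role: "user" | "assistant"; content: string };

function splitHistory(messages: ChatMessage[]) {
  const lastMessage = messages[messages.length - 1];
  // slice(-11, -1): up to 10 messages immediately preceding the last one
  const previousMessages = messages.slice(-11, -1);
  return { lastMessage, previousMessages };
}

// Build a 15-message conversation: "message 0" … "message 14"
const sample: ChatMessage[] = Array.from({ length: 15 }, (_, i) => ({
  role: i % 2 === 0 ? "user" : "assistant",
  content: `message ${i}`,
}));

const { lastMessage, previousMessages } = splitHistory(sample);
console.log(lastMessage.content); // "message 14"
console.log(previousMessages.length); // 10
```

With fewer than 11 messages, `slice(-11, -1)` simply returns everything except the last message, so no explicit length check is needed.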
The chat component is responsible for rendering the message input as well as the Messages component which handles the rendering of user and assistant messages.
Let's create the chat interface at src/app/components/Chat.tsx
"use client";
import React, { ChangeEvent } from "react";
import Messages from "./Messages";
import { Message } from "ai/react";
interface Chat {
input: string;
handleInputChange: (e: ChangeEvent<HTMLInputElement>) => void;
handleMessageSubmit: (e: React.FormEvent<HTMLFormElement>) => void;
messages: Message[];
}
const Chat: React.FC<Chat> = ({
input,
handleInputChange,
handleMessageSubmit,
messages,
}) => {
return (
<div id="chat" className="flex flex-col w-full mx-2">
<Messages messages={messages} />
<form
onSubmit={handleMessageSubmit}
className="ml-1 mt-5 mb-5 relative bg-gray-500 rounded-lg"
>
<input
type="text"
className="input-glow appearance-none border rounded w-full py-2 px-3 text-gray-700 leading-tight focus:outline-none focus:shadow-outline pl-3 pr-10 bg-gray-100 border-gray-100 transition-shadow duration-200"
value={input}
onChange={handleInputChange}
/>
<span className="absolute inset-y-0 right-0 flex items-center pr-3 pointer-events-none text-gray-400">
Press ⮐ to send
</span>
</form>
</div>
);
};
export default Chat;
The Messages component is responsible for rendering user-submitted messages and the LLM outputs.
Let's create the Messages component at src/app/components/Messages.tsx
Here we will add a markdown formatter for the OpenAI responses:
npm install --save react-markdown
"use client";
import { Message } from "ai";
import { useEffect, useRef } from "react";
import ReactMarkdown from "react-markdown";
export default function Messages({ messages }: { messages: Message[] }) {
const messagesEndRef = useRef<HTMLDivElement | null>(null);
useEffect(() => {
messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
}, [messages]);
return (
<div
className="border-1 border-gray-100 overflow-y-scroll flex-grow flex-col justify-end p-1"
style={{ scrollbarWidth: "none" }}
>
{messages.map((msg, index) => (
<div
key={index}
className={`${
msg.role === "assistant" ? "bg-green-500" : "bg-blue-500"
} my-2 p-3 shadow-md hover:shadow-lg transition-shadow duration-200 flex slide-in-bottom bg-blue-500 border border-gray-900 message-glow`}
>
<div className="ml- rounded-tl-lg p-2 border-r flex items-center">
{msg.role === "assistant" ? "🤖" : "🧅"}
</div>
<div className="ml-2 text-white">
<ReactMarkdown>{msg.content}</ReactMarkdown>
</div>
</div>
))}
<div ref={messagesEndRef} />
</div>
);
}
Wrapping up, let's modify the main src/app/page.tsx to incorporate our Chat component:
"use client";
import Chat from "./components/Chat";
import { useChat } from "ai/react";
import { useEffect } from "react";
const Page: React.FC = () => {
const { messages, input, handleInputChange, handleSubmit, setMessages } =
useChat();
useEffect(() => {
if (messages.length < 1) {
setMessages([
{
role: "assistant",
content: "Welcome to the Smart Search chatbot!",
id: "welcome",
},
]);
}
}, [messages, setMessages]);
return (
<div className="flex flex-col justify-between h-screen bg-white mx-auto max-w-full">
<div className="flex w-full flex-grow overflow-hidden relative">
<Chat
input={input}
handleInputChange={handleInputChange}
handleMessageSubmit={handleSubmit}
messages={messages}
/>
</div>
</div>
);
};
export default Page;
Update the metadata in src/app/layout.tsx:
import type { Metadata } from "next";
import { Inter } from "next/font/google";
import "./globals.css";

const inter = Inter({ subsets: ["latin"] });

export const metadata: Metadata = {
  title: "Smart Search RAG",
  description: "Generated by create next app",
};

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  );
}
Finally, update src/app/globals.css:
@import "tailwindcss/base";
@import "tailwindcss/components";
@import "tailwindcss/utilities";
@keyframes slideInFromBottom {
0% {
transform: translateY(100%);
opacity: 0;
}
100% {
transform: translateY(0);
opacity: 1;
}
}
.slide-in-bottom {
animation: slideInFromBottom 0.3s ease-out;
}
.input-glow {
box-shadow: 0 0 3px #b2bfd7, 0 0 5px #b2bfd7;
}
.input-glow:hover {
box-shadow: 0 0 5px #87f4f6, 0 0 10px #8b9ebe;
}
.message-glow {
box-shadow: 0 0 3px #b2bfd7, 0 0 5px #b2bfd7;
}
.message-glow:hover {
box-shadow: 0 0 3px #75e7e9, 0 0 4px #8b9ebe;
}
@keyframes glimmer {
0% {
background-position: -200px;
}
100% {
background-position: calc(200px + 100%);
}
}
@keyframes shimmer {
0% {
transform: translateX(-100%);
}
100% {
transform: translateX(100%);
}
}
.shimmer {
animation: glimmer 2s infinite linear;
background: rgb(82, 82, 91);
background: linear-gradient(to right,
darkgray 10%,
rgb(130, 129, 129) 50%,
rgba(124, 123, 123, 0.816) 90%);
background-size: 200px 100%;
background-repeat: no-repeat;
/* color: transparent; */
}
@keyframes pulse {
0%,
100% {
color: white;
}
50% {
color: #f59e0b;
/* Tailwind's yellow-500 */
}
}
.animate-pulse-once {
animation: pulse 5s cubic-bezier(0, 0, 0.2, 1) 1;
}
To seamlessly integrate our chat bot into a WordPress site, we'll create two essential files: embed.js and embed.css. These files will reside in the public directory of our Next.js application. By doing this, we can easily embed the chat bot using a simple script tag in the WordPress site.
First, let's create the embed.js file in the public directory. This JavaScript file will handle the initialization and toggling of the chat bot.
// public/embed.js
(function () {
  const scriptUrl = new URL(document.currentScript.src);
  const baseUrl = `${scriptUrl.protocol}//${scriptUrl.host}`;

  function createChatIcon() {
    var chatIcon = document.createElement("div");
    chatIcon.id = "chat-icon";
    chatIcon.innerHTML = "Chat";
    chatIcon.addEventListener("click", function (event) {
      event.stopPropagation();
      toggleChatIframe();
    });
    document.body.appendChild(chatIcon);
  }

  function toggleChatIframe() {
    var iframe = document.getElementById("chat-iframe");
    iframe.classList.toggle("hidden");
  }

  function renderChatIframe() {
    var iframe = document.createElement("iframe");
    iframe.id = "chat-iframe";
    iframe.src = baseUrl; // The chat app is served from the same host as this script
    iframe.classList.add("hidden");
    document.body.appendChild(iframe);
  }

  function loadCss() {
    const link = document.createElement("link");
    link.rel = "stylesheet";
    link.href = `${baseUrl}/embed.css`;
    document.head.appendChild(link);
  }

  function handleClose() {
    document.addEventListener("click", function () {
      var iframe = document.getElementById("chat-iframe");
      if (iframe.classList.contains("hidden")) return;
      iframe.classList.add("hidden");
    });
  }

  loadCss();
  renderChatIframe();
  createChatIcon();
  handleClose();
})();
Explanation: the script derives its own base URL from the script tag's src, injects the stylesheet, renders a hidden iframe that loads the chat app, and adds a floating chat icon that toggles the iframe. Clicking anywhere else on the page closes the chat; the icon's click handler calls stopPropagation so the toggle is not immediately undone by the document-level close handler.
Next, let's create the embed.css file in the public directory. This CSS file will style the chat icon and the chat iframe.
/* public/embed.css */
#chat-icon {
  position: fixed;
  display: flex;
  align-items: center;
  justify-content: center;
  bottom: 10px;
  right: 10px;
  width: 70px;
  height: 70px;
  border-radius: 50%;
  background-color: rgb(59 130 246);
  color: white;
  font-size: 12px;
  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.5);
}

#chat-icon:hover {
  cursor: pointer;
  background-color: rgb(39, 90, 216);
  animation: hoverAnimation 1s;
}

@keyframes hoverAnimation {
  0% {
    background-color: rgb(59, 130, 246);
  }
  100% {
    background-color: rgb(39, 90, 216);
  }
}

#chat-iframe {
  z-index: 9999;
  position: fixed;
  bottom: 113px;
  width: 600px;
  height: 60%;
  right: 10px;
  border: 1px solid #ccc;
  box-shadow: -1px 2px 4px rgba(0, 0, 0, 0.5);
  border-radius: 10px;
}

.hidden {
  display: none;
}
Explanation: the #chat-icon rules render a fixed circular button in the bottom-right corner of the page, the #chat-iframe rules position and size the chat window above it, and the .hidden class toggles the iframe's visibility.
To embed the chat bot on a WordPress site, add the following action to your WordPress theme (for example in functions.php); the script will be enqueued in the footer:
add_action(
  'wp_enqueue_scripts',
  function () {
    wp_enqueue_script(
      'embed-chatbot',
      'http://{NEXT_JS_APP_URL}/embed.js',
      array(),
      null,
      true
    );
  }
);
By adding embed.js and embed.css to the public directory, we create a straightforward method to embed our chat bot on a WordPress site. This approach ensures a smooth integration, providing a fully functional and styled chat bot with minimal effort.